Computing professionals’ actions change the world. To act responsibly, they should reflect upon the wider impacts of their work, consistently supporting the public good. The ACM Code of Ethics and Professional Conduct (“the Code”) expresses the conscience of the profession.
The Code is designed to inspire and guide the ethical conduct of all computing professionals, including current and aspiring practitioners, instructors, students, influencers, and anyone who uses computing technology in an impactful way.
### PROFESSIONAL RESPONSIBILITIES
The Software Engineering Code of Ethics and Professional Practice (Version 5.2) was recommended by the ACM/IEEE-CS Joint Task Force on Software Engineering Ethics and Professional Practices and jointly approved by the ACM and the IEEE-CS as the standard for teaching and practicing software engineering.
In accordance with their commitment to the health, safety and welfare of the public, software engineers shall adhere to the following Eight Principles:
PUBLIC – Software engineers shall act consistently with the public interest.
CLIENT AND EMPLOYER – Software engineers shall act in a manner that is in the best interests of their client and employer consistent with the public interest.
PRODUCT – Software engineers shall ensure that their products and related modifications meet the highest professional standards possible.
JUDGMENT – Software engineers shall maintain integrity and independence in their professional judgment.
MANAGEMENT – Software engineering managers and leaders shall subscribe to and promote an ethical approach to the management of software development and maintenance.
PROFESSION – Software engineers shall advance the integrity and reputation of the profession consistent with the public interest.
COLLEAGUES – Software engineers shall be fair to and supportive of their colleagues.
SELF – Software engineers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession.
If you write code for a living, there’s a chance that at some point in your career, someone will ask you to code something a little deceitful – if not outright unethical.
Doing business in China is good for shareholders, bad for humanity
It’s no mystery why Google executives want to do business with Chinese government officials: it’s profitable. With a population of more than 1.3 billion, China has the largest number of internet users in the world, so breaking into the Chinese market has been a long-standing goal for Silicon Valley tech giants in their quest to find new users and grow profits.
But working in China inevitably raises ethical issues for any US company. Doing business in mainland China means making deals with an authoritarian government that has a record of human rights abuses and strict suppression of speech.
Google has committed not to use artificial intelligence for weapons or surveillance after employees protested the company’s involvement in Project Maven, a Pentagon pilot program that uses artificial intelligence to analyze drone footage.
Google CEO Sundar Pichai announced the change in a set of AI principles released today. The principles are intended to govern Google’s use of artificial intelligence and are a response to employee pressure on the company to create guidelines for its use of AI.
Employees at the company have spent months protesting Google’s involvement in Project Maven, sending a letter to Pichai demanding that Google terminate its contract with the Department of Defense. Several employees even resigned in protest, concerned that Google was aiding the development of autonomous weapons systems.
Azim Shariff, an assistant professor of psychology and social behavior at the University of California, Irvine, co-authored a study last year that found a telling contradiction. Respondents generally agreed that, in the case of an inevitable crash, a car should kill the fewest people possible, regardless of whether they were passengers or people outside the car. Yet the same respondents were less likely to buy any car “in which they and their family member would be sacrificed for the greater good.”
Self-driving cars could save tens of thousands of lives each year, Shariff said. But individual fears could slow down acceptance, leaving traditional cars and their human drivers on the road longer to battle it out with autonomous or semi-autonomous cars. Already, the American Automobile Association says three-quarters of U.S. drivers are suspicious of self-driving vehicles.
“These ethical problems are not just theoretical,” said Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University, who has worked with Ford, Tesla and other autonomous vehicle makers on just such issues.
“The greater challenge is the artificial intelligence behind the machine,” Toyota Canada president Larry Hutchinson said, addressing the TalkAuto Conference in Toronto last November. “Think of the millions of situations that we process and decisions that we have to make in real traffic. … We need to program that intelligence into a vehicle, but we don’t have the data yet to create a machine that can perceive and respond to the virtually endless permutations of near misses and random occurrences that happen on even a simple trip to the corner store.”
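To make the dilemma concrete, here is a minimal, purely illustrative Python sketch contrasting the two rules from Shariff’s study: a utilitarian rule that minimizes total deaths, and the passenger-protective rule that buyers actually preferred. All class names, scenarios, and numbers are invented for this example; no vendor’s software works this way.

```python
from dataclasses import dataclass

@dataclass
class CrashOption:
    """One possible maneuver in a hypothetical unavoidable-crash scenario."""
    name: str
    passenger_deaths: int
    bystander_deaths: int

    @property
    def total_deaths(self) -> int:
        return self.passenger_deaths + self.bystander_deaths

def utilitarian_choice(options: list[CrashOption]) -> CrashOption:
    # The rule respondents endorsed in the abstract:
    # minimize total deaths, counting passengers and bystanders equally.
    return min(options, key=lambda o: o.total_deaths)

def passenger_protective_choice(options: list[CrashOption]) -> CrashOption:
    # The rule buyers preferred for their own car:
    # protect occupants first, then minimize other deaths.
    return min(options, key=lambda o: (o.passenger_deaths, o.bystander_deaths))

options = [
    CrashOption("swerve into barrier", passenger_deaths=1, bystander_deaths=0),
    CrashOption("stay in lane", passenger_deaths=0, bystander_deaths=3),
]

print(utilitarian_choice(options).name)           # swerve into barrier
print(passenger_protective_choice(options).name)  # stay in lane
```

The two rules agree on most inputs but diverge on exactly the cases the study asked about, which is why choosing between them is an ethical decision, not a purely technical one.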
In principle, any computerized system with an interface to the outside world is potentially hackable. Any computer scientist knows that it is very difficult to create software without bugs, especially when the software is complex. Some bugs are security vulnerabilities, and some of those vulnerabilities are exploitable.
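To illustrate how an ordinary bug becomes an exploitable vulnerability, here is a short, self-contained Python sketch; the table, data, and function names are invented for illustration. Building a SQL query by pasting user input into the query string lets an attacker change the query’s meaning, while a parameterized query keeps input as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

def lookup_unsafe(name: str):
    # The bug: user input is pasted directly into the SQL text, so a
    # crafted name such as "x' OR '1'='1" rewrites the query's logic.
    query = f"SELECT name, is_admin FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name: str):
    # The fix: a parameterized query keeps input as data, never as code.
    return conn.execute(
        "SELECT name, is_admin FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(lookup_unsafe(payload))  # every row leaks, including the admin account
print(lookup_safe(payload))    # empty list: no user actually has that name
```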
We put our responsible AI principles into practice through the Office of Responsible AI (ORA), the AI, Ethics, and Effects in Engineering and Research (Aether) Committee, and Responsible AI Strategy in Engineering (RAISE). The Aether Committee advises our leadership on the challenges and opportunities presented by AI innovations. ORA sets our rules and governance processes, working closely with teams across the company to enable the effort. RAISE is a team that enables the implementation of Microsoft’s responsible AI rules across engineering groups.
As technologists, it’s only natural that we spend most of our time focusing on how our tech will change the world for the better. Which is great. Everyone loves a sunny disposition. But perhaps it’s more useful, in some ways, to consider the glass half empty.
We will assess AI applications in view of the following objectives. We believe that AI should:
1. Be socially beneficial. The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment.
2. Avoid creating or reinforcing unfair bias. AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief. (A brief sketch of one way to measure such bias follows this list.)
3. Be built and tested for safety. We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.
4. Be accountable to people. We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal.
5. Incorporate privacy design principles. We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.
6. Uphold high standards of scientific excellence. Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration.
7. Be made available for uses that accord with these principles. Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications.
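As a sketch of how the bias concern in principle 2 can be made measurable, here is a minimal, hypothetical Python example that computes a model’s approval rate per demographic group, one common first check for statistical disparity. The groups, decisions, and data are all invented.

```python
from collections import defaultdict

# Invented (group, model_approved) pairs standing in for a real audit log.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
seen = defaultdict(int)
for group, ok in decisions:
    seen[group] += 1
    approved[group] += ok  # True counts as 1, False as 0

# Approval rate per group; a large gap signals statistical disparity.
rates = {group: approved[group] / seen[group] for group in seen}
print(rates)  # group_a ≈ 0.67, group_b ≈ 0.33
```

A gap between groups flags a statistical disparity; judging whether that disparity is unfair still requires the human, context-specific judgment the principle describes.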